24 research outputs found

    European guidelines for quality assurance in colorectal cancer screening and diagnosis. First Edition Introduction

    Multidisciplinary, evidence-based guidelines for quality assurance in colorectal cancer screening and diagnosis have been developed by experts in a project coordinated by the International Agency for Research on Cancer. The full guideline document covers the entire process of population-based screening. It consists of 10 chapters and over 250 recommendations, graded according to the strength of the recommendation and the supporting evidence. The 450-page guidelines and the extensive evidence base have been published by the European Commission. The first chapter deals with the evidence for the effectiveness of CRC screening; key operational parameters such as age range, the interval between two negative screening examinations, and some combinations of tests; and cost-effectiveness. The content of the chapter is presented here to promote international discussion and collaboration by making the principles and standards recommended in the new EU Guidelines known to a wider professional and scientific community. Following these recommendations has the potential to enhance the control of colorectal cancer through improvement in the quality and effectiveness of the screening process, including multi-disciplinary diagnosis and management of the disease. © Georg Thieme Verlag KG Stuttgart, New York

    Cost effectiveness of surveillance for GI cancers

    Gastrointestinal (GI) diseases are among the leading causes of death in the world. To reduce the burden of GI diseases, surveillance is recommended for several conditions, including inflammatory bowel diseases, Barrett's oesophagus, precancerous gastric lesions, colorectal adenoma, and pancreatic neoplasms. This review aims to provide an overview of the evidence on the cost-effectiveness of surveillance of individuals with GI conditions predisposing them to cancer, focussing specifically on the aforementioned conditions. We searched the literature and reviewed 21 studies. Despite the heterogeneity of the studies in terms of settings, study populations, surveillance strategies and outcomes, most suggested that at least some surveillance of patients with these GI conditions is cost-effective. For some high-risk conditions, frequent surveillance at 3-month intervals was warranted, while for other conditions surveillance may only be cost-effective every 10 years. Further studies based on more robust effectiveness evidence are needed to inform and optimise surveillance programmes for GI cancers.

    Development of new non-invasive tests for colorectal cancer screening: The relevance of information on adenoma detection

    Researchers are actively pursuing the development of a new non-invasive test (NIT) for colorectal cancer (CRC) screening as an alternative to fecal occult blood tests (FOBTs). The majority of pilot studies focus on the detection of invasive CRC rather than precursor lesions (i.e., adenomas). We aimed to explore the relevance of adenoma detection for the viability of an NIT for CRC screening by considering a hypothetical test that does not detect adenomas beyond chance. We used the Simulation Model of Colorectal Cancer (SimCRC) to estimate the effectiveness of CRC screening and the lifetime costs (payers' perspective) for a cohort of 50-year-old US persons to whom CRC screening is offered from ages 50 to 75. We compared annual screening with guaiac and immunochemical FOBTs (with sensitivities of up to 70% and 24% for CRC and adenomas, respectively) to annual screening with a hypothetical NIT (sensitivity of 90% for CRC, no detection of adenomas beyond chance, specificity and cost similar to FOBTs). Screening with the NIT was not more effective, but was 29–44% more costly than screening with FOBTs. The findings were robust to varying the screening interval, the NIT's sensitivity for CRC, adherence rates favoring the NIT, and the NIT's unit cost. A comparative modelling approach using a model that assumes a shorter adenoma dwell time (MISCAN-COLON) confirmed the superiority of the immunochemical FOBT over an NIT with no ability to detect adenomas. Information on adenoma detection is crucial to determine whether a new NIT is a viable alternative to FOBTs for CRC screening. Current evidence thus lacks an important piece of information to identify marker candidates that hold real promise and deserve further (large-scale) evaluation.

    Colorectal cancer mortality prevented by use and attributable to nonuse of colonoscopy

    Background: Use of colonoscopy is thought to reduce colorectal cancer (CRC) mortality, but its impact at the population level is unclear. Objective: To estimate the effect of current colonoscopy use on CRC mortality and its further potential in reducing CRC mortality. Design: Population-level analysis was performed by using the concepts of prevented and attributable fractions, by using data from the National Health Interview Survey, the Surveillance, Epidemiology and End Results Program, and estimates of the effectiveness of colonoscopy at reducing CRC mortality. Setting: The 2005 U.S. population aged 50 years and older. Exposure: Colonoscopy within the preceding 10 years. Main Outcome Measurements: Percentages and absolute numbers of CRC deaths prevented and potentially preventable by colonoscopy. Limitations: Uncertainty in effectiveness estimates. Results: Overall, the proportions of CRC deaths in 2005 prevented by colonoscopy (ie, the prevented fractions) range from 13% (95% CI, 11%-15%) to 19% (95% CI, 12%-24%) across the estimates of colonoscopy effectiveness. Corresponding numbers of CRC deaths prevented range from 7314 (95% CI, 6010-8467) to 11,711 (95% CI, 7077-14,898). The proportions of CRC deaths attributable to nonuse of colonoscopy (ie, the attributable fractions) range from 28% (95% CI, 22%-33%) to 44% (95% CI, 24%-60%), depending on the assumed effectiveness. Corresponding numbers of CRC deaths attributed to nonuse of colonoscopy range from 13,796 (95% CI, 11,076-16,255) to 22,088 (95% CI, 12,189-29,947). Conclusions: Although we estimate that colonoscopy has prevented substantial numbers of CRC deaths, many more deaths could have been prevented with more widespread use. These findings highlight the potential benefits from public health interventions to increase the use of screening colonoscopy. © 2011 American Society for Gastrointestinal Endoscopy
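The prevented- and attributable-fraction concepts used above reduce to simple arithmetic. A minimal sketch in Python; the uptake, effectiveness, and death counts below are hypothetical round numbers for illustration, not the study's survey-based estimates:

```python
def prevented_fraction(uptake: float, effectiveness: float) -> float:
    """Share of deaths averted by current use, relative to a counterfactual
    with no colonoscopy use: PF = p * (1 - RR), where RR = 1 - effectiveness."""
    return uptake * effectiveness


def attributable_fraction(uptake: float, effectiveness: float) -> float:
    """Share of observed deaths attributable to nonuse, i.e. the share that
    would be averted if the unscreened were also screened:
    AF = (1 - p)(1 - RR) / (1 - p(1 - RR))."""
    return (1 - uptake) * effectiveness / (1 - uptake * effectiveness)


# Hypothetical inputs: 40% uptake, 35% mortality reduction, 50,000 observed deaths.
uptake, effectiveness = 0.40, 0.35
observed_deaths = 50_000
# observed deaths reflect current use; scale up to the no-use counterfactual
counterfactual_deaths = observed_deaths / (1 - uptake * effectiveness)

print(f"prevented fraction:    {prevented_fraction(uptake, effectiveness):.1%}")
print(f"attributable fraction: {attributable_fraction(uptake, effectiveness):.1%}")
print(f"deaths prevented:      {counterfactual_deaths - observed_deaths:.0f}")
```

The real analysis propagates uncertainty in uptake and effectiveness to produce the confidence intervals quoted above; this sketch only shows where the two fractions come from.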

    State disparities in colorectal cancer mortality patterns in the United States

    Background: Colorectal cancer (CRC) mortality rates have been decreasing for many decades in the United States, with the decrease accelerating in the most recent time period. The extent to which this decrease varies across states and its influence on the geographic patterns of rates is unknown. Methods: We analyzed the temporal trend in age-standardized CRC death rates for each state from 1990 to 2007 using joinpoint regression. We also examined the change in death rates between 1990-1994 and 2003-2007 using rate ratios with 95% confidence intervals and illustrated the change in pattern using maps. The relationship between the change in mortality rates and CRC screening rates for 2004 by state was examined using Pearson's correlation. Results: CRC mortality rates significantly decreased in all states except Mississippi between 1990 and 2007 based on the joinpoint model. The decrease in death rates between 1990-1994 and 2003-2007 ranged from 9% in Alabama to greater than 33% in Massachusetts, Rhode Island, New York, and Alaska; Mississippi and Wyoming showed no significant decrease. Generally, the northeastern states showed the largest decreases, whereas southern states showed the smallest decreases. The highest CRC mortality rates shifted from the northeastern states during 1990 to 1994 to the southern states along the Appalachian corridor during 2003 to 2007. The decrease in CRC mortality rates by state correlated strongly with uptake of screening (r = -0.65, P < 0.0001). Conclusions: Progress in reducing CRC mortality varies across states, with the Northeast showing the most progress and the South showing the least progress. Impact: These findings highlight the need for wider dissemination of CRC screening.

    The Impact of Uncertainty in Barrett's Esophagus Progression Rates on Hypothetical Screening and Treatment Decisions

    Background: Estimates for the annual progression rate from Barrett's esophagus (BE) to esophageal adenocarcinoma (EAC) vary widely. In this explorative study, we quantified how this uncertainty affects the estimates of effectiveness and efficiency of screening and treatment for EAC. Design: We developed 3 versions of the University of Washington / Microsimulation Screening Analysis-EAC model. The models differed with respect to the annual progression rate from BE to EAC (0.12% or 0.42%) and the possibility of spontaneous regression of dysplasia (yes or no). All versions of the model were calibrated to the observed Surveillance, Epidemiology, and End Results esophageal cancer incidence rates from 1998 to 2009. To identify the impact of natural history, we estimated the incidence and deaths prevented as well as numbers needed to screen (NNS) and treat (NNT) of a one-time perfect screening at age 65 years that detected all prevalent BE cases, followed by a perfect treatment intervention. Results: Assuming a perfect screening and treatment intervention for all patients with BE, the maximum EAC mortality reduction (64%-66%) and the NNS per death prevented (470-510) were similar across the 3 model versions. However, 3 times more people needed to be treated to prevent 1 death (24 v. 8) in the 0.12% regression model compared with the 0.42% progression model. Restricting treatment to those with dysplasia or only high-grade dysplasia resulted in smaller differences in NNT (2-3 to prevent one EAC case) but wider variation in effectiveness (mortality reduction of 15%-24%). Conclusion: The uncertainty in the natural history of the BE to EAC sequence influenced the estimates of effectiveness and efficiency of BE screening and treatment considerably. This uncertainty could seriously hamper decision making about implementing BE screening and treatment interventions.
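The sensitivity of the NNT to the assumed progression rate falls out of simple expected-value arithmetic: with a perfect treatment, deaths averted per treated BE case scale with the lifetime risk of dying of EAC. A back-of-envelope sketch; only the two progression rates come from the abstract, while the 15 years at risk and 80% case fatality are hypothetical round numbers:

```python
def nnt_per_death_prevented(annual_progression: float,
                            years_at_risk: float,
                            case_fatality: float) -> float:
    """Treated BE cases per EAC death averted, assuming treatment is perfect
    and progression is a small constant annual hazard (linear approximation)."""
    deaths_averted_per_case = annual_progression * years_at_risk * case_fatality
    return 1 / deaths_averted_per_case


# Hypothetical horizon and fatality; progression rates from the two model versions.
for rate in (0.0012, 0.0042):  # 0.12% vs. 0.42% annual BE-to-EAC progression
    print(f"{rate:.2%} progression -> NNT ~ {nnt_per_death_prevented(rate, 15, 0.8):.0f}")
```

A 3.5-fold higher progression rate cuts the NNT by the same factor in this approximation, which matches the direction of the 24 vs. 8 difference reported above; the microsimulation values differ because dwell times, dysplasia stages, and competing mortality are modeled explicitly.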

    Should colorectal cancer screening be considered in elderly persons without previous screening?: A cost-effectiveness analysis

    Background: The U.S. Preventive Services Task Force recommends against routine screening for colorectal cancer (CRC) in adequately screened persons older than 75 years but does not address the appropriateness of screening in elderly persons without previous screening. Objective: To determine at what ages CRC screening should be considered in unscreened elderly persons and to determine which test is indicated at each age. Design: Microsimulation modeling study. Data Sources: Observational and experimental studies. Target Population: Unscreened persons aged 76 to 90 years with no, moderate, and severe comorbid conditions. Time Horizon: Lifetime. Perspective: Societal. Intervention: One-time colonoscopy, sigmoidoscopy, or fecal immunochemical test (FIT) screening. Outcome Measures: Quality-adjusted life-years gained, costs, and costs per quality-adjusted life-year gained. Results of Base-Case Analysis: In unscreened elderly persons with no comorbid conditions, CRC screening was cost-effective up to age 86 years. Screening with colonoscopy was indicated up to age 83 years, sigmoidoscopy was indicated at age 84 years, and FIT was indicated at ages 85 and 86 years. In unscreened persons with moderate comorbid conditions, screening was cost-effective up to age 83 years (colonoscopy indicated up to age 80 years, sigmoidoscopy at age 81 years, and FIT at ages 82 and 83 years). In unscreened persons with severe comorbid conditions, screening was cost-effective up to age 80 years (colonoscopy indicated up to age 77 years, sigmoidoscopy at age 78 years, and FIT at ages 79 and 80 years). Results of Sensitivity Analyses: Results were most sensitive to assuming a lower willingness to pay per quality-adjusted life-year gained. Limitation: Only persons at average risk for CRC were considered. Conclusion: In unscreened elderly persons, CRC screening should be considered well beyond age 75 years. A colonoscopy is indicated at most ages. © 2014 American College of Physicians
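The age cutoffs above come from comparing strategies on a cost-effectiveness frontier: at each age, accept the most effective strategy whose incremental cost-effectiveness ratio (ICER) stays under the willingness-to-pay threshold. A simplified sketch of that selection rule; the strategy names, per-person costs, and QALY gains are invented, and extended-dominance pruning is glossed over:

```python
from typing import List, Tuple

Strategy = Tuple[str, float, float]  # (name, cost, QALYs gained vs. no screening)


def choose(strategies: List[Strategy], wtp: float) -> str:
    """Walk strategies in order of effectiveness, adopting each one whose
    ICER versus the current choice is at or below willingness to pay."""
    best = ("no screening", 0.0, 0.0)
    for name, cost, qalys in sorted(strategies, key=lambda s: s[2]):
        extra_qalys = qalys - best[2]
        if extra_qalys <= 0:
            continue  # dominated: costs more without gaining QALYs
        icer = (cost - best[1]) / extra_qalys
        if icer <= wtp:
            best = (name, cost, qalys)
    return best[0]


# Invented per-person costs and QALY gains for a one-time test at a given age.
options = [("FIT", 300.0, 0.010),
           ("sigmoidoscopy", 500.0, 0.012),
           ("colonoscopy", 900.0, 0.015)]
print(choose(options, wtp=50_000))   # cheap test wins at a low threshold
print(choose(options, wtp=150_000))  # colonoscopy justified at a high threshold
```

This mechanism is why the indicated test steps down from colonoscopy to sigmoidoscopy to FIT as achievable QALY gains shrink with age and comorbidity, and why results are sensitive to the assumed willingness to pay.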

    The appropriateness of more intensive colonoscopy screening than recommended in Medicare beneficiaries: A modeling study

    Importance: Many Medicare beneficiaries undergo more intensive colonoscopy screening than recommended. Whether this is favorable for beneficiaries and efficient from a societal perspective is uncertain. Objective: To determine whether more intensive colonoscopy screening than recommended is favorable for Medicare beneficiaries (ie, whether it results in a net health benefit) and whether it is efficient from a societal perspective (ie, whether the net health benefit justifies the additional resources required). Design, Setting, and Participants: Microsimulation modeling study of 65-year-old Medicare beneficiaries at average risk for colorectal cancer (CRC) and with an average life expectancy who underwent a screening colonoscopy at 55 years with negative results. Interventions: Colonoscopy screening as recommended by guidelines (ie, at 65 and 75 years) vs scenarios with a shorter screening interval (5 or 3 instead of 10 years) or in which screening was continued to 85 or 95 years. Main Outcomes and Measures: Quality-adjusted life-years (QALYs) gained (measure of net health benefit); additional colonoscopies required per additional QALY gained and additional costs per additional QALY gained (measures of efficiency). Results: Screening previously screened Medicare beneficiaries more intensively than recommended resulted in only small increases in CRC deaths prevented and life-years gained. In comparison, the increases in colonoscopies performed and colonoscopy-related complications experienced were large. As a result, all scenarios of more intensive screening than recommended resulted in a loss of QALYs, rather than a gain (ie, a net harm). The only exception was shortening the screening interval from 10 to 5 years, which resulted in 0.7 QALYs gained per 1000 beneficiaries. However, this scenario was inefficient because it required no less than 909 additional colonoscopies and an additional $711,000 per additional QALY gained. Results in previously unscreened beneficiaries were slightly less unfavorable, but conclusions were identical. Conclusions and Relevance: Screening Medicare beneficiaries more intensively than recommended is not only inefficient from a societal perspective, but often also unfavorable for those being screened. This study provides evidence and a clear rationale for clinicians and policy makers to actively discourage this practice.
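The net-harm finding is, at bottom, a balance of small survival gains against the quality-of-life burden of extra colonoscopies and their complications. A toy version of that balance; every number below is invented for illustration and is not from the study:

```python
def net_qalys(life_years_gained: float,
              extra_colonoscopies: int,
              qaly_loss_per_colonoscopy: float,
              complication_rate: float,
              qaly_loss_per_complication: float) -> float:
    """QALYs gained per 1000 beneficiaries: survival benefit minus the
    expected burden of the additional colonoscopies and complications."""
    burden_each = (qaly_loss_per_colonoscopy
                   + complication_rate * qaly_loss_per_complication)
    return life_years_gained - extra_colonoscopies * burden_each


# Invented inputs per 1000 beneficiaries for a more intensive scenario:
# 5 life-years gained, 2000 extra colonoscopies, small per-procedure burden.
print(net_qalys(5.0, 2000, 0.002, 0.01, 0.3))  # negative result -> net harm
```

Even a tiny per-procedure QALY loss dominates once the number of extra colonoscopies is large relative to the life-years gained, which is the mechanism behind the study's conclusion.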

    Effects of Increasing Screening Age and Fecal Hemoglobin Cutoff Concentrations in a Colorectal Cancer Screening Program

    Background & Aims: Several countries have implemented programs to screen for colorectal cancer (CRC) by using the fecal immunochemical test (FIT). These programs vary considerably in age of the population screened and the cutoff concentration of fecal hemoglobin (Hb) used to identify candidates for further evaluation; these variations are usually based on a country’s colonoscopy resources. We calculated how increasing the Hb cutoff concentration and screening age affects colonoscopy yield, missed lesions, and demand. Methods: We collected data from 10,008 average-risk individuals in The Netherlands, 50–74 years old, who were invited for an FIT in the first round of a population-based CRC screening program from November 2006 through December 2008. Fecal samples were collected, and levels of Hb were measured by using the OC-Sensor Micro analyzer; concentrations ≥10 μg Hb/g feces were considered positive. Subjects with a positive FIT were scheduled for colonoscopy within 4 weeks. Logistic regression analysis was performed to evaluate the association between age and detection of advanced neoplasia. Results: In total, 5986 individuals (62%) participated in the study; 503 (8.4%) had a positive test result. Attendance, positive test results, detection of advanced neoplasia, and the FIT’s positive predictive value all increased significantly with age (P < .001). Detection of advanced neoplasia ranged from 1.3% in the youngest age group to 6.2% in the oldest group; the positive predictive value of the FIT was 26% in the youngest group and 47% in the oldest group. Narrowing the invited age range from 50–74 years to 55–74 years reduced the proportion of subjects who underwent colonoscopy evaluation by 14% and resulted in 9% more subjects with advanced neoplasia being missed.
Increasing the cutoff concentration from 10 to 15 μg Hb/g feces reduced the proportion of subjects who underwent colonoscopy evaluation by 11% and resulted in 6% of advanced neoplasia being missed. Conclusions: In an analysis of an average-risk screening population in The Netherlands, we found that detection of advanced neoplasia by FIT increases significantly with age and fecal Hb cutoff concentration. Increasing the cutoff concentration or screening age reduces the number of patients who undergo colonoscopy evaluation in FIT-based CRC screening programs. Our findings provide insight into these effects by age category and cutoff concentration, and into the consequences in terms of missed lesions.
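The cutoff trade-off reported above can be reproduced on any set of FIT results with a few lines. A sketch, assuming each sample carries a measured Hb concentration and a flag for advanced neoplasia confirmed at colonoscopy; the six sample values are synthetic, not the Dutch trial data:

```python
from typing import List, Tuple

Sample = Tuple[float, bool]  # (fecal Hb in ug/g feces, advanced neoplasia present)


def summarize(samples: List[Sample], cutoff: float) -> Tuple[int, float, int]:
    """Return (colonoscopy referrals, PPV, advanced neoplasia missed)
    for a given fecal-Hb cutoff concentration."""
    positives = [(hb, adv) for hb, adv in samples if hb >= cutoff]
    referred = len(positives)
    detected = sum(1 for _, adv in positives if adv)
    total_advanced = sum(1 for _, adv in samples if adv)
    ppv = detected / referred if referred else 0.0
    return referred, ppv, total_advanced - detected


# Synthetic FIT results: concentrations in ug Hb/g feces.
samples = [(2.0, False), (8.0, False), (12.0, False),
           (14.0, True), (25.0, True), (40.0, True)]
for cutoff in (10.0, 15.0):
    referred, ppv, missed = summarize(samples, cutoff)
    print(f"cutoff {cutoff} ug/g: {referred} referred, PPV {ppv:.0%}, {missed} missed")
```

Raising the cutoff trims referrals and lifts PPV, at the cost of lesions slipping below the threshold; the study quantifies exactly this trade-off (11% fewer colonoscopies, 6% of advanced neoplasia missed) at the program scale.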